
    Understanding and Overcoming the Challenges Related to Cardiovascular Trials Involving Patients with Kidney Disease.

    Cardiovascular disease is a prevalent and prognostically important comorbidity among patients with kidney disease, and individuals with kidney disease make up a sizeable proportion (30%-60%) of patients with cardiovascular disease. However, several systematic reviews of cardiovascular trials have observed that patients with kidney disease, particularly those with advanced kidney disease, are often excluded from trial participation. Thus, currently available trial data for cardiovascular interventions in patients with kidney disease may be insufficient to make recommendations on the optimal approach for many therapies. The Kidney Health Initiative, a public-private partnership between the American Society of Nephrology and the US Food and Drug Administration, convened a multidisciplinary, international work group and hosted a stakeholder workshop intended to understand and develop strategies for overcoming the challenges with involving patients with kidney disease in cardiovascular clinical trials, with a particular focus on those with advanced disease. These efforts considered perspectives from stakeholders, including academia, industry, contract research organizations, regulatory agencies, patients, and care partners. This article outlines the key challenges and potential solutions discussed during the workshop centered on the following areas for improvement: building the business case, re-examining study design and implementation, and changing the clinical trial culture in nephrology. Regulatory and financial incentives could serve to mitigate financial concerns with involving patients with kidney disease in cardiovascular trials. Concerns that their inclusion could affect efficacy or safety results could be addressed through thoughtful approaches to study design and risk mitigation strategies. 
Finally, there is a need for closer collaboration between nephrologists and cardiologists, and for systemic change within the nephrology community, so that participation of patients with kidney disease in clinical trials is prioritized. Ultimately, greater participation of patients with kidney disease in cardiovascular trials will help build the evidence base needed to guide optimal management of cardiovascular disease in this population.

    The conceptualisation of health and disease in veterinary medicine

    Background: The concept of health, as well as the concept of disease, is central in veterinary medicine. However, there are no generally acknowledged definitions of "health" and "disease" among veterinarians. The aim of this study was to examine how the concepts "health" and "disease" are defined in veterinary textbooks. Methods: Veterinary textbooks in several disciplines were investigated, but only textbooks with explicit definitions of the concepts were selected for examination. Results: Eighty of the 500 relevant books within veterinary medicine were written for non-veterinarians. Eight percent of the books had an explicit definition of health and/or disease; textbooks written for non-veterinarians had such definitions more frequently than textbooks written for professionals. A division of health definitions into five categories was suggested, namely: 1. Health as normality, 2. Health as biological function, 3. Health as homeostasis, 4. Health as physical and psychological well-being, and 5. Health as productivity, including reproduction. Conclusion: Few veterinary textbooks had any definition of health or disease at all. Furthermore, explicit definitions of health stated by the authors seemed to have little impact on how health and disease are handled within the profession. Veterinary medicine would probably gain from theoretical discussions about health and disease.

    Psychometric evaluation of a newly developed measure of emotionalism after stroke (TEARS-Q)

    Objective: To evaluate, psychometrically, a new measure of tearful emotionalism following stroke: Testing Emotionalism After Recent Stroke – Questionnaire (TEARS-Q). Setting: Acute stroke units based in nine Scottish hospitals, in the context of a longitudinal cohort study of post-stroke emotionalism. Subjects: A total of 224 clinically diagnosed stroke survivors recruited between October 1st 2015 and September 30th 2018, within 2 weeks of their stroke. Measures: The measure was the self-report questionnaire TEARS-Q, constructed from the diagnostic criteria for post-stroke tearful emotionalism: (i) increased tearfulness; (ii) crying comes on suddenly, with no warning; (iii) crying is not under usual social control; and (iv) crying episodes occur at least once weekly. The reference standard was the presence or absence of emotionalism on a diagnostic, semi-structured post-stroke emotionalism interview administered at the same assessment point. Stroke, mood, cognition and functional outcome measures were also completed by the subjects. Results: A total of 97 subjects were female, with a mean age of 65.1 years; 205 subjects had sustained ischaemic stroke, and 61 were classified as having mild stroke. TEARS-Q was internally consistent (Cronbach's alpha 0.87), and its scores readily discriminated between survivors with and without tearful emotionalism, with a mean difference of −7.18 (95% CI −8.07 to −6.29). A cut-off score of 2 on TEARS-Q correctly identified 53 of the 61 stroke survivors with tearful emotionalism and 140 of the 156 stroke survivors without tearful emotionalism. One factor accounted for 57% of the item response variance, and all eight TEARS-Q items acceptably discriminated underlying emotionalism. Conclusion: TEARS-Q accurately diagnoses tearful emotionalism after stroke.
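The reported cut-off performance corresponds to straightforward sensitivity and specificity arithmetic; a minimal sketch using only the counts quoted in the abstract:

```python
# Diagnostic accuracy of the TEARS-Q cut-off score of 2,
# computed from the counts reported in the abstract.
true_positives = 53          # of 61 survivors with tearful emotionalism
false_negatives = 61 - 53
true_negatives = 140         # of 156 survivors without emotionalism
false_positives = 156 - 140

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.3f}")  # 0.869
print(f"specificity = {specificity:.3f}")  # 0.897
```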

    An Evaluation of Methods for Inferring Boolean Networks from Time-Series Data

    Regulatory networks play a central role in cellular behavior and decision making. Learning these regulatory networks is a major task in biology, and devising computational methods and mathematical models for this task is a major endeavor in bioinformatics. Boolean networks have been used extensively for modeling regulatory networks. In this model, the state of each gene is either 'on' or 'off', and the next state of a gene is updated, synchronously or asynchronously, according to a Boolean rule applied to the current state of the entire system. Inferring a Boolean network from a set of experimental data entails two main steps: first, the experimental time-series data are discretized into Boolean trajectories, and then a Boolean network is learned from these Boolean trajectories. In this paper, we consider three methods for data discretization, including a new one we propose, and three methods for learning Boolean networks, and study the performance of all nine possible combinations on four regulatory systems of varying dynamical complexity. We find that employing the right combination of methods for data discretization and network learning results in Boolean networks that capture the dynamics well and provide predictive power. Our findings are in contrast to a recent survey that placed Boolean networks on the low end of the "faithfulness to biological reality" and "ability to model dynamics" spectra. Further, contrary to the common argument in favor of Boolean networks, we find that a relatively large number of time points in the time-series data is required to learn good Boolean networks for certain data sets. Last but not least, while methods have been proposed for inferring Boolean networks, publicly available implementations of them are still missing. Here, we make our implementation of the methods publicly available in open source at http://bioinfo.cs.rice.edu/
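The synchronous update scheme described above can be sketched in a few lines; the three-gene network and its rules here are illustrative inventions, not taken from the paper:

```python
# Minimal synchronous Boolean network simulation.
# Each gene is on (True) or off (False); the next state is computed
# by applying every gene's Boolean rule to the same current state.
# The three-gene rules below are illustrative, not from the paper.

rules = {
    "A": lambda s: not s["C"],          # A is repressed by C
    "B": lambda s: s["A"],              # B is activated by A
    "C": lambda s: s["A"] and s["B"],   # C needs both A and B
}

def step(state):
    # Synchronous update: all rules read the same current state.
    return {gene: rule(state) for gene, rule in rules.items()}

def trajectory(state, n_steps):
    states = [state]
    for _ in range(n_steps):
        state = step(state)
        states.append(state)
    return states

traj = trajectory({"A": True, "B": False, "C": False}, 4)
for s in traj:
    print("".join("1" if s[g] else "0" for g in "ABC"))
# For this start state: 100 -> 110 -> 111 -> 011 -> 000
```

Inference then amounts to searching for rules whose trajectories reproduce the discretized data.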

    Support and Assessment for Fall Emergency Referrals (SAFER 1) trial protocol. Computerised on-scene decision support for emergency ambulance staff to assess and plan care for older people who have fallen: evaluation of costs and benefits using a pragmatic cluster randomised trial

    Background: Many emergency ambulance calls are for older people who have fallen. As half of them are left at home, a community-based response may often be more appropriate than hospital attendance. The SAFER 1 trial will assess the costs and benefits of a new healthcare technology - hand-held computers with computerised clinical decision support (CCDS) software - to help paramedics decide who needs hospital attendance, and who can be safely left at home with referral to community falls services. Methods/Design: Pragmatic cluster randomised trial with a qualitative component. We shall allocate 72 paramedics ('clusters') at random between receiving the intervention and a control group delivering care as usual, of whom we expect 60 to complete the trial. Patients are eligible if they are aged 65 or older, live in the study area but not in residential care, and are attended by a study paramedic following an emergency call for a fall. Seven to 10 days after the index fall we shall offer patients the opportunity to opt out of further follow-up. Continuing participants will receive questionnaires after 1 and 6 months, and we shall monitor their routine clinical data for 6 months. We shall interview 20 of these patients in depth, and conduct focus groups or semi-structured interviews with paramedics and other stakeholders. The primary outcome is the interval to the first subsequent reported fall (or death). We shall analyse this and other measures of outcome, process and cost by 'intention to treat'. We shall analyse qualitative data thematically. Discussion: Since the SAFER 1 trial received funding in August 2006, implementation has had to come to terms with ambulance service reorganisation and a new national electronic patient record in England. In response to these hurdles the research team has adapted the research design, including aspects of the intervention, to meet the needs of the ambulance services.
In conclusion, this complex emergency care trial will provide rigorous evidence on the clinical and cost effectiveness of CCDS for paramedics in the care of older people who have fallen.

    A new multicompartmental reaction-diffusion modeling method links transient membrane attachment of E. coli MinE to E-ring formation

    Many important cellular processes are regulated by reaction-diffusion (RD) of molecules that takes place both in the cytoplasm and on the membrane. To model and analyze such multicompartmental processes, we developed a lattice-based Monte Carlo method, Spatiocyte, which supports RD in volume and surface compartments at single-molecule resolution. Stochasticity in RD and the excluded volume effect brought about by intracellular molecular crowding, both of which can significantly affect RD and thus cellular processes, are also supported. We verified the method by comparing simulation results of diffusion and of irreversible and reversible reactions with the predicted analytical and best available numerical solutions. Moreover, to directly compare the localization patterns of molecules in fluorescence microscopy images with simulation, we devised a visualization method that mimics the microphotography process by showing the trajectory of simulated molecules averaged according to the camera exposure time. In the rod-shaped bacterium _Escherichia coli_, the division site is suppressed at the cell poles by periodic pole-to-pole oscillations of the Min proteins (MinC, MinD and MinE) arising from carefully orchestrated RD in both the cytoplasm and membrane compartments. Using Spatiocyte, we could model and reproduce the _in vivo_ MinDE localization dynamics by accounting for the established properties of MinE. Our results suggest that the MinE ring, which is essential in preventing polar septation, is largely composed of MinE that remains transiently attached to the membrane independently of MinD after being recruited by it. Overall, Spatiocyte allows simulation and visualization of complex spatial and reaction-diffusion-mediated cellular processes in volumes and on surfaces. As we showed, it can potentially provide mechanistic insights otherwise difficult to obtain experimentally.
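The excluded volume effect mentioned above can be illustrated with a toy lattice Monte Carlo walk in which a hop is rejected when the target site is already occupied. This is a deliberately simplified sketch, not Spatiocyte's actual scheme:

```python
import random

# Toy lattice Monte Carlo diffusion with excluded volume:
# particles hop to a random neighbouring site, and a hop is
# rejected if the target site is already occupied (crowding).
# Illustrative sketch only, not Spatiocyte's algorithm.

SIZE = 20                           # 20 x 20 periodic square lattice
random.seed(1)

occupied = set()
while len(occupied) < 50:           # place 50 particles at random
    occupied.add((random.randrange(SIZE), random.randrange(SIZE)))

def sweep(occupied):
    current = set(occupied)
    for x, y in list(occupied):     # attempt one hop per particle
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        target = ((x + dx) % SIZE, (y + dy) % SIZE)
        if target not in current:   # excluded volume: reject if occupied
            current.remove((x, y))
            current.add(target)
    return current

for _ in range(100):
    occupied = sweep(occupied)
print(len(occupied))                # particle number is conserved: 50
```

Rejecting moves into occupied sites is what slows effective diffusion as crowding increases.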

    A danger of low copy numbers for inferring incorrect cooperativity degree

    Background: A dose-response curve depicts the fraction of bound proteins as a function of unbound ligands. Dose-response curves are used to measure the degree of cooperativity of a ligand binding process. Frequently, the Hill function is used to fit the experimental data. The Hill function is parameterized by the value of the dissociation constant and by the Hill coefficient, which describes the degree of cooperativity. The use of Hill's model and the Hill function has been heavily criticised in this context, predominantly for the assumption that all ligands bind at once, which led to further refinements of the model. In this work, the validity of the Hill function has been studied from an entirely different point of view. In the limit of low copy numbers the dynamics of the system becomes noisy. The goal was to assess the validity of the Hill function in this limit, and to see in which ways the effects of the fluctuations change the form of the dose-response curves. Results: Dose-response curves were computed taking into account the effects of fluctuations. These effects were described at the lowest order (the second moment of the particle number distribution) using the previously developed Pair Approach Reaction Noise EStimator (PARNES) method. The stationary state of the system is described by nine equations with nine unknowns. To obtain fluctuation-corrected dose-response curves, the equations were investigated numerically. Conclusions: The Hill function cannot describe dose-response curves in the low particle number limit. First, dose-response curves are not solely parameterized by the dissociation constant and the Hill coefficient; in general, the shape of a dose-response curve depends on the variables that describe how an experiment (ensemble) is designed. Second, dose-response curves are multi-valued in a rather non-trivial way.
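One common parameterization of the Hill function discussed above, showing how the Hill coefficient controls steepness while the dissociation constant sets the half-saturation point:

```python
# Hill function: fraction of bound protein as a function of free
# ligand concentration L, with dissociation constant K and Hill
# coefficient n (one common parameterization).

def hill(L, K, n):
    return L**n / (K**n + L**n)

# At L = K the curve is at half-saturation regardless of n:
print(hill(1.0, 1.0, 1))   # 0.5
print(hill(1.0, 1.0, 4))   # 0.5

# A larger Hill coefficient gives a steeper, more switch-like curve:
print(hill(2.0, 1.0, 1))   # ~0.667
print(hill(2.0, 1.0, 4))   # ~0.941
```

The paper's point is that in the low-copy-number regime no choice of K and n makes this two-parameter family fit the fluctuation-corrected curves.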

    Global parameter estimation methods for stochastic biochemical systems

    Background: The importance of stochasticity in cellular processes with low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications such as analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to the single-molecule level. Thus, the purpose of this work is to develop practical and effective methods for estimating the kinetic model parameters of the chemical master equation and other stochastic models from single-cell and cell-population experimental data. Results: Three parameter estimation methods are proposed based on maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions.
Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low-replicate measurements, the density function distance methods, particularly cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions: The parameter estimation methodologies described in this work provide an effective and practical approach for estimating the kinetic parameters of stochastic systems from either sparse or dense cell population data. Nevertheless, as with kinetic parameter estimation in other modelling frameworks, not all parameters can be estimated accurately, a common problem arising from the lack of complete parameter identifiability from the available data.
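The cumulative-density-function-distance idea can be illustrated on a toy problem: simulate a finite number of Monte Carlo realizations at each candidate parameter, build empirical CDFs, and keep the parameter whose CDF lies closest to the data's. The geometric "model" and candidate grid below are illustrative assumptions, not the paper's case studies:

```python
import random

# Toy CDF-distance parameter estimation: pick the candidate
# parameter whose Monte Carlo empirical CDF best matches the data.
# The geometric-distribution "model" is illustrative only.

random.seed(0)

def simulate(p, n_runs=2000):
    # Stand-in stochastic model: number of failures before the first
    # success when each trial succeeds with probability p.
    samples = []
    for _ in range(n_runs):
        count = 0
        while random.random() > p:
            count += 1
        samples.append(count)
    return samples

def ecdf_distance(a, b, support=range(50)):
    # Max absolute difference between two empirical CDFs
    # (a Kolmogorov-Smirnov-style distance on a finite support).
    d = 0.0
    for x in support:
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

data = simulate(0.3)               # pretend this is the measured data
candidates = [0.1, 0.2, 0.3, 0.4, 0.5]
best = min(candidates, key=lambda p: ecdf_distance(simulate(p), data))
print(best)                        # recovers the true value, 0.3
```

Because only finitely many realizations are drawn per candidate, the distance never reaches zero even at the true parameter, which is the finite-sampling effect the abstract says must be accounted for.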

    Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and the resulting hybrid models can be simulated using the particle-based simulator NFsim.
Performance tests show that significant memory savings can be achieved using the new approach, and a monetary cost analysis provides a practical measure of its utility. © 2014 Hogg et al.
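Gillespie's algorithm, named above as a network-based simulation method, can be sketched for a toy birth-death system; the rate constants and species are illustrative, not from the paper:

```python
import random

# Minimal Gillespie stochastic simulation of a toy birth-death system:
#   0 -> X  at rate k_birth
#   X -> 0  at rate k_death * X
# Rates and species are illustrative, not from the paper.

def gillespie(k_birth, k_death, x0, t_end, seed=42):
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_birth = k_birth                 # propensity of the birth event
        a_death = k_death * x             # propensity of the death event
        a_total = a_birth + a_death
        # Time to the next reaction is exponentially distributed.
        t += rng.expovariate(a_total)
        # Choose which reaction fires, proportional to its propensity.
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie(k_birth=10.0, k_death=1.0, x0=0, t_end=50.0)
print(traj[-1][1])   # fluctuates around k_birth / k_death = 10
```

Because every reaction event is simulated explicitly, the cost grows with the number of firings; the hybrid method above exists precisely to avoid this for abundant species.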

    Recommendations for a core outcome set for measuring standing balance in adult populations: a consensus-based approach

    Standing balance is imperative for mobility and avoiding falls. Use of an excessive number of standing balance measures has limited the synthesis of balance intervention data and hampered consistent clinical practice. The aim was to develop recommendations for a core outcome set (COS) of standing balance measures for research and practice among adults, using a combination of scoping reviews, literature appraisal, anonymous voting and face-to-face meetings with fourteen invited experts from a range of disciplines with international recognition in balance measurement and falls prevention. Consensus was sought over three rounds using pre-established criteria. The scoping review identified 56 existing standing balance measures validated in adult populations with evidence of use in the past five years, and these were considered for inclusion in the COS. Fifteen measures were excluded after the first round of scoring and a further 36 after round two; five measures were considered in round three. Two measures reached consensus for recommendation, and the expert panel recommended that, at a minimum, either the Berg Balance Scale or the Mini Balance Evaluation Systems Test be used when measuring standing balance in adult populations. Inclusion of two measures in the COS may increase the feasibility of uptake, but poses challenges for data synthesis. Adoption of the standing balance COS does not constitute a comprehensive balance assessment for any population, and users should include additional validated measures as appropriate. The absence of a gold standard for measuring standing balance has contributed to the proliferation of outcome measures. These recommendations represent an important first step towards greater standardization in the assessment and measurement of this critical skill and will inform clinical research and practice internationally.